
    Randomized Algorithms for the Loop Cutset Problem

    We show how to find a minimum weight loop cutset in a Bayesian network with high probability. Finding such a loop cutset is the first step in the method of conditioning for inference. Our randomized algorithm for finding a loop cutset outputs a minimum loop cutset after O(c 6^k kn) steps with probability at least 1 - (1 - 1/6^k)^(c 6^k), where c > 1 is a constant specified by the user, k is the minimal size of a minimum weight loop cutset, and n is the number of vertices. We also show empirically that a variant of this algorithm often finds a loop cutset that is closer to the minimum weight loop cutset than the ones found by the best known deterministic algorithms.
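    The published algorithm is more involved (it interleaves graph reductions with vertex sampling biased by degree over weight), but the repeat-and-keep-best pattern it rests on is easy to sketch. The following is a minimal illustration, assuming Python with networkx, treating the problem abstractly as weighted feedback vertex set on the network's undirected skeleton and ignoring the Bayesian-network-specific details of loop cutsets; all function names are hypothetical.

```python
import random
import networkx as nx

def random_cutset(G, weight="weight", rng=random):
    """One randomized pass: delete randomly chosen cycle vertices
    (biased toward cheap ones) until the graph is acyclic."""
    H = G.copy()
    cutset = []
    while True:
        try:
            cycle = nx.find_cycle(H)
        except nx.NetworkXNoCycle:
            return cutset  # no cycles left: the cutset is feasible
        verts = list({u for e in cycle for u in e[:2]})
        # favour low-weight vertices, echoing the degree/weight bias
        # of the real algorithm (an assumption in this sketch)
        bias = [1.0 / H.nodes[v].get(weight, 1.0) for v in verts]
        v = rng.choices(verts, weights=bias, k=1)[0]
        cutset.append(v)
        H.remove_node(v)

def best_of_many(G, trials=1000, weight="weight"):
    """Repeat the randomized pass many times and keep the lightest
    cutset found, mirroring the 'c * 6^k iterations' pattern."""
    best, best_w = None, float("inf")
    for _ in range(trials):
        c = random_cutset(G, weight)
        w = sum(G.nodes[v].get(weight, 1.0) for v in c)
        if w < best_w:
            best, best_w = c, w
    return best, best_w
```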

    Cosmic Bulk Flow and the Local Motion from Cosmicflows-2

    Full-sky surveys of peculiar velocity are arguably the best way to map the large scale structure out to distances of a few times 100 Mpc/h. Using the largest and most accurate catalog of galaxy peculiar velocities to date, Cosmicflows-2, the large scale structure has been reconstructed by means of the Wiener filter and constrained realizations, assuming as a Bayesian prior the LCDM model with the WMAP-inferred cosmological parameters. The present paper focuses on the bulk flow of the local flow field, defined as the mean velocity of top-hat spheres with radii ranging out to R=500 Mpc/h. The estimated large scale structure in general, and the bulk flow in particular, are determined by the tension between the observational data and the assumed prior model. A prerequisite for such an analysis is that the estimated bulk flow be consistent with the prior model; such consistency is found here. At R=50 (150) Mpc/h the estimated bulk velocity is 250+/-21 (239+/-38) km/s. The corresponding cosmic variance at these radii is 126 (60) km/s, which implies that these estimated bulk flows are dominated by the data and not by the assumed prior model. The estimated bulk velocity is dominated by the data out to R~200 Mpc/h, where the cosmic variance of the individual Supergalactic Cartesian components (of the r.m.s. values) exceeds the variance of the constrained realizations by at least a factor of 2. The supergalactic SGX and SGY components of the CMB dipole velocity are recovered by the Wiener filter velocity field to within a few km/s. The SGZ component of the estimated velocity, the one most affected by the Zone of Avoidance, is off by 126 km/s (an almost 2-sigma discrepancy). Comment: 10 pages, accepted for MNRAS
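    As a concrete illustration of the quantity being estimated, the top-hat bulk flow at radius R is simply the mean of the reconstructed velocity vectors within a sphere of that radius. A minimal numpy sketch, assuming the Wiener-filter field has already been sampled at positions pos (Mpc/h, observer at the origin) with velocities vel (km/s); the array names are assumptions.

```python
import numpy as np

def bulk_velocity(pos, vel, R):
    """Top-hat bulk flow: mean velocity of all sample points inside
    a sphere of radius R centred on the observer.

    pos : (N, 3) supergalactic Cartesian positions, Mpc/h
    vel : (N, 3) peculiar-velocity vectors, km/s
    """
    inside = np.linalg.norm(pos, axis=1) <= R
    return vel[inside].mean(axis=0)  # (B_SGX, B_SGY, B_SGZ), km/s

# Amplitude of the flow in growing spheres, e.g. the radii quoted above:
# for R in (50, 150, 500):
#     print(R, np.linalg.norm(bulk_velocity(pos, vel, R)))
```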

    Fast Structuring of Radio Networks for Multi-Message Communications

    We introduce collision-free layerings as a powerful way to structure radio networks. These layerings can replace hard-to-compute BFS trees in many contexts while having an efficient randomized distributed construction. We demonstrate their versatility by using them to provide near-optimal distributed algorithms for several multi-message communication primitives. Designing efficient communication primitives for radio networks has a rich history that began 25 years ago when Bar-Yehuda et al. introduced fast randomized algorithms for broadcasting and for constructing BFS trees. Their BFS-tree construction time was O(D log^2 n) rounds, where D is the network diameter and n is the number of nodes. Since then, the complexity of a broadcast has been resolved to be T_BC = Theta(D log(n/D) + log^2 n) rounds. On the other hand, BFS trees have been used as a crucial building block for many communication primitives, and their construction time remained a bottleneck for these primitives. We introduce collision-free layerings that can be used in place of BFS trees, and we give a randomized construction of these layerings that runs in nearly broadcast time, that is, w.h.p. in T_Lay = O(D log(n/D) + log^(2+epsilon) n) rounds for any constant epsilon > 0. We then use these layerings to obtain: (1) a randomized algorithm for gathering k messages running w.h.p. in O(T_Lay + k) rounds; (2) a randomized k-message broadcast algorithm running w.h.p. in O(T_Lay + k log n) rounds. These algorithms are optimal up to the small difference in the additive poly-logarithmic term between T_BC and T_Lay. Moreover, they imply the first optimal O(n log n) round randomized gossip algorithm.
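    For context on the randomized primitives this line of work builds on, the classic Decay pattern of Bar-Yehuda et al. has each informed node transmit and then drop out with probability 1/2 in every round, so that with constant probability some round has exactly one transmitter among a receiver's neighbors. Below is a toy single-receiver simulation under a simple collision model (a round succeeds only when exactly one neighbor transmits); it is illustrative only, not the paper's layering construction.

```python
import random

def decay_phase(informed_neighbors, rounds):
    """Simulate one Decay phase at a single silent receiver.

    informed_neighbors : number of the receiver's neighbors that
                         hold the message at the start of the phase
    Returns True if some round had exactly one transmitter, i.e.
    the receiver decoded the message without a collision.
    """
    active = informed_neighbors
    for _ in range(rounds):
        if active == 1:          # exactly one transmitter: success
            return True
        # each active node independently keeps transmitting w.p. 1/2
        active = sum(random.random() < 0.5 for _ in range(active))
    return False

# With k informed neighbors, O(log k) rounds succeed with constant
# probability; repeating phases drives the failure probability down.
# print(sum(decay_phase(8, 6) for _ in range(1000)) / 1000)
```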

    Goodness-of-fit analysis of the Cosmicflows-2 database of velocities

    The goodness-of-fit (GoF) of the Cosmicflows-2 (CF2) database of peculiar velocities with the LCDM standard model of cosmology is presented. A standard Chi^2 analysis of the full database, with its 4,838 data points, is hampered by small-scale nonlinear dynamics that is not accounted for by the (linear-regime) velocity power spectrum. The bulk velocity constitutes a highly compressed representation of the data which filters out the small-scale non-linear modes, hence the statistics of the bulk flow provide an efficient tool for assessing the GoF of the data given a model. The particular approach introduced here is to use the (spherical top-hat window) bulk velocity extracted from the Wiener filter reconstruction of the 3D velocity field as a linear, low-pass-filtered, highly compressed representation of the CF2 data. An ensemble of 2,250 random linear realizations of the WMAP/LCDM model has been used to calculate the bulk velocity auto-covariance matrix. We find that the CF2 data is consistent with the WMAP/LCDM model to better than the 2-sigma confidence limits. This provides a further validation that the CF2 database is consistent with the standard model of cosmology. Comment: submitted to MNRAS; v2: fixed page sizing problem
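    A minimal sketch of the kind of bulk-flow Chi^2 test described here: estimate the auto-covariance of the bulk-velocity components from an ensemble of random model realizations, then evaluate B^T C^-1 B for the observed bulk flow. This simplified version treats a single radius (three Cartesian components, hence three degrees of freedom), whereas the paper compresses several radii at once; the array names are assumptions.

```python
import numpy as np
from scipy import stats

def bulk_flow_gof(B_data, B_realizations):
    """Chi^2 goodness-of-fit of an observed bulk flow against a model.

    B_data         : (3,)   observed bulk-flow components, km/s
    B_realizations : (M, 3) bulk flows measured in M random linear
                            realizations of the assumed model
    """
    C = np.cov(B_realizations, rowvar=False)           # 3x3 auto-covariance
    chi2 = float(B_data @ np.linalg.solve(C, B_data))  # B^T C^-1 B
    p_value = stats.chi2.sf(chi2, df=3)                # prob. of a worse fit
    return chi2, p_value

# Consistency at the ~2-sigma level corresponds roughly to p_value > 0.05.
```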

    The Arrowhead Mini-Supercluster of Galaxies

    Superclusters of galaxies can be defined kinematically from local evaluations of the velocity shear tensor. The location where the smallest eigenvalue of the shear is positive and maximal defines the center of a basin of attraction. Velocity and density fields are reconstructed with Wiener filter techniques. Local velocities due to the density field in a restricted region can be separated from external tidal flows, permitting the identification of boundaries separating inward flows toward a basin of attraction from outward flows. This methodology was used to define the Laniakea Supercluster, which includes the Milky Way. Large adjacent structures include Perseus-Pisces, Coma, Hercules, and Shapley, but current kinematic data are insufficient to capture their full domains. However, there is a small region trapped between Laniakea, Perseus-Pisces, and Coma that is close enough to be reliably characterized and that satisfies the kinematic definition of a supercluster. Because of its shape, it is given the name the Arrowhead Supercluster. This entity does not contain any major clusters. A characteristic dimension is ~25 Mpc and the contained mass is only ~10^15 Msun. Comment: Accepted for publication in The Astrophysical Journal. Video can be viewed at http://irfu.cea.fr/arrowhead
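    The kinematic definition rests on the eigenvalues of the velocity shear tensor, Sigma_ij = -(1/(2 H0)) (dv_i/dr_j + dv_j/dr_i). A minimal numpy sketch for a velocity field sampled on a regular grid; the grid layout and the H0 normalization (100 km/s per Mpc/h, for positions in Mpc/h) are assumptions of this sketch, not the authors' pipeline.

```python
import numpy as np

def shear_eigenvalues(vel, spacing, H0=100.0):
    """Eigenvalues of the velocity shear tensor on a regular grid.

    vel     : (3, Nx, Ny, Nz) velocity field, km/s
    spacing : grid cell size, Mpc/h
    H0      : km/s per Mpc/h, so the eigenvalues are dimensionless
    Returns (Nx, Ny, Nz, 3) eigenvalues, sorted in descending order.
    """
    # grads[i, j] = d v_i / d r_j, shape (3, 3, Nx, Ny, Nz)
    grads = np.stack([np.stack(np.gradient(vel[i], spacing), axis=0)
                      for i in range(3)])
    sigma = -(grads + grads.transpose(1, 0, 2, 3, 4)) / (2.0 * H0)
    sigma = np.moveaxis(sigma, (0, 1), (-2, -1))   # (..., 3, 3)
    eig = np.linalg.eigvalsh(sigma)                # ascending order
    return eig[..., ::-1]                          # descending order
```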

    Filaments from the galaxy distribution and from the velocity field in the local universe

    The cosmic web that characterizes the large-scale structure of the Universe can be quantified by a variety of methods. For example, large redshift surveys can be used in combination with point-process algorithms to extract long curvilinear filaments in the galaxy distribution. Alternatively, given a full 3D reconstruction of the velocity field, kinematic techniques can be used to decompose the web into voids, sheets, filaments and knots. In this paper we examine how two such algorithms - the Bisous model and the velocity shear web - compare with each other in the local Universe (within 100 Mpc), finding good agreement. This is both remarkable and comforting, given that the two methods are radically different in approach and are applied to completely independent data sets. Unsurprisingly, the methods agree better when applied to unbiased and complete data sets, like cosmological simulations, than when applied to observational samples. We conclude that more observational data are needed to improve on these methods, but that both methods are most likely properly tracing the underlying distribution of matter in the Universe. Comment: 6 pages, 2 figures, submitted to MNRAS Letters
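    On the velocity shear web side of this comparison, the standard classification assigns each grid cell a web type by counting shear eigenvalues above a threshold. A minimal sketch that reuses the eigenvalue field from the previous snippet; the threshold value of 0.44 is a commonly quoted choice and an assumption here, not taken from this paper.

```python
import numpy as np

WEB_TYPES = ("void", "sheet", "filament", "knot")

def classify_vweb(eigenvalues, lam_th=0.44):
    """Velocity shear web: a cell's type is the number of shear
    eigenvalues exceeding lam_th (0 = void, 1 = sheet,
    2 = filament, 3 = knot).

    eigenvalues : (..., 3) shear eigenvalues per grid cell
    """
    return (eigenvalues > lam_th).sum(axis=-1)

# Volume fractions of each web type:
# counts = classify_vweb(shear_eigenvalues(vel, spacing))
# for t, name in enumerate(WEB_TYPES):
#     print(name, (counts == t).mean())
```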

    Hitting Diamonds and Growing Cacti

    We consider the following NP-hard problem: in a weighted graph, find a minimum-cost set of vertices whose removal leaves a graph in which no two cycles share an edge (such graphs are known as cacti, possibly with trees attached). We obtain a constant-factor approximation algorithm, based on the primal-dual method. Moreover, we show that the integrality gap of the natural LP relaxation of the problem is Theta(log n), where n denotes the number of vertices in the graph. Comment: v2: several minor changes
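    The paper's primal-dual algorithm itself is beyond a short snippet, but the feasibility condition is easy to check: a graph has no two cycles sharing an edge exactly when every biconnected component is a single edge or a simple cycle (equivalently, has as many edges as vertices). A sketch of that check, assuming Python with networkx.

```python
import networkx as nx

def is_cactus(G):
    """True iff no two cycles of G share an edge, i.e. every
    biconnected component is a single edge or a simple cycle."""
    for comp in nx.biconnected_component_edges(G):
        edges = list(comp)
        verts = {u for e in edges for u in e}
        # a biconnected component with more edges than vertices
        # contains two cycles that share an edge
        if len(edges) > 1 and len(edges) != len(verts):
            return False
    return True

# A 'diamond' (two triangles sharing an edge) is the smallest offender:
# D = nx.Graph([(1, 2), (2, 3), (3, 1), (2, 4), (4, 3)])
# print(is_cactus(D))                     # False
# D.remove_node(4); print(is_cactus(D))   # True: a single triangle
```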